Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress on generalization through synthetic forgery data augmentation. In this work, we explore another path for improving generalization. Our goal is to suppress the features that are easy to learn during training, so as to reduce the risk of overfitting to specific forgery types. Specifically, in our method, a teacher network takes face images as input and generates an attention map over the deep features with a diverse multi-head attention ViT. The attention map is used to guide a student network to focus on low-attended features by suppressing the highly-attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method achieves promising performance on unseen forgeries and highly compressed data.
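A minimal sketch of the two components described above, assuming ViT-style patch features of shape (B, N, C); the function names and the top-k suppression rule are illustrative, not the paper's exact formulation:

```python
import torch

def suppress_topk(student_feats, teacher_attn, ratio=0.3):
    """Zero out the student's most teacher-attended feature locations.

    student_feats: (B, N, C) patch features; teacher_attn: (B, N) attention map.
    """
    k = int(ratio * teacher_attn.shape[1])
    topk = teacher_attn.topk(k, dim=1).indices      # highly-attended patches
    mask = torch.ones_like(teacher_attn)
    mask.scatter_(1, topk, 0.0)                     # drop them
    return student_feats * mask.unsqueeze(-1)       # keep low-attended features

def feature_mixup(real_feats, fake_feats, alpha=0.5):
    """Synthesize forgeries in the feature domain by convex mixing."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * real_feats + (1 - lam) * fake_feats
```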
In this work, we investigate improving the generalizability of GAN-generated image detectors by performing data augmentation in the fingerprint domain. Specifically, we first separate the fingerprints and contents of GAN-generated images using an autoencoder-based GAN fingerprint extractor, followed by random perturbation of the fingerprints. The original fingerprints are then substituted with the perturbed fingerprints and added back to the original contents, producing images that are visually invariant but carry distinct fingerprints. The perturbed images can successfully imitate images generated by different GANs and thereby improve the generalization of the detectors, as demonstrated by spectral visualization. To our knowledge, we are the first to conduct data augmentation in the fingerprint domain. Our work explores a novel direction distinct from previous works on spatial- and frequency-domain augmentation. Extensive cross-GAN experiments demonstrate the effectiveness of our method compared with state-of-the-art methods in detecting fake images generated by unknown GANs.
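A rough sketch of this pipeline, under the assumption that the fingerprint can be taken as the residual between an image and its reconstruction from a pretrained autoencoder `ae`; names and the Gaussian perturbation are illustrative:

```python
import torch

def augment_in_fingerprint_domain(img, ae, sigma=0.1):
    """Swap a GAN image's fingerprint for a randomly perturbed one.

    img: (B, 3, H, W) GAN-generated images in [0, 1]; ae: autoencoder whose
    reconstruction approximates the fingerprint-free content.
    """
    with torch.no_grad():
        content = ae(img)                 # fingerprint-free content estimate
    fingerprint = img - content           # extracted fingerprint (residual)
    perturbed = fingerprint + sigma * torch.randn_like(fingerprint)
    return (content + perturbed).clamp(0, 1)  # visually similar, new fingerprint
```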
The success of deep learning is partly attributed to the availability of massive data downloaded freely from the Internet. However, it also means that users' private data may be collected by commercial organizations without consent and used to train their models. Therefore, it is important and necessary to develop a method or tool to prevent unauthorized data exploitation. In this paper, we propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable in order to protect the data privacy of its owners. Specifically, the noise produced by the generator for each image has the confounder property: it builds spurious correlations between images and labels, so that a model cannot learn the correct mapping from images to labels on this noise-added dataset. Meanwhile, the discriminator is used to ensure that the generated noise is small and imperceptible, thereby preserving the normal utility of the encrypted images for humans. Experiments are conducted on six image classification datasets, consisting of three natural-object datasets and three medical datasets. The results demonstrate that our method not only outperforms state-of-the-art methods in standard settings, but can also be applied to fast-encryption scenarios. Moreover, we present a series of transferability and stability experiments to further illustrate the effectiveness and superiority of our method.
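A toy sketch of the generator's training signal, assuming an image classifier `f`, generator `G`, and discriminator `D`; the loss weighting and noise bound are assumptions, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def confoundergan_step(G, D, f, x, y, eps=8 / 255, lam=1.0):
    """One simplified generator update: noise that confounds image-label
    learning while staying imperceptible to the discriminator."""
    delta = eps * torch.tanh(G(x))            # bounded per-image noise
    x_noisy = (x + delta).clamp(0, 1)
    # Confounder objective: make the noisy data trivially fittable via the
    # noise itself, so a model trained on it never learns the true
    # image-to-label mapping.
    confound_loss = F.cross_entropy(f(x_noisy), y)
    logits_d = D(x_noisy)
    percept_loss = F.binary_cross_entropy_with_logits(
        logits_d, torch.ones_like(logits_d))  # "looks clean" to D
    return confound_loss + lam * percept_loss
```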
Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo ray tracing with importance sampling. The framework takes a single image as input and jointly recovers the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details such as complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement, leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate the superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling applications such as complex object insertion and material editing with high fidelity. Code and data will be made available at \url{https://jingsenzhu.github.io/invrend}.
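The screen-space ray tracing ingredient can be illustrated with a minimal, non-differentiable depth-buffer ray march; this is a generic SSRT sketch under simplified camera assumptions, not the paper's rendering layer:

```python
import numpy as np

def ssrt_march(depth, origin, direction, n_steps=64, thickness=0.05):
    """March a ray through screen space and return the first pixel whose
    depth-buffer value occludes the ray (a hit), else None.

    depth: (H, W) linear depth map; origin/direction: 3D vectors in a camera
    space where x, y are already in pixel units and z is view depth.
    """
    h, w = depth.shape
    p = origin.astype(np.float64)
    step = direction / n_steps              # march the full span in n_steps
    for _ in range(n_steps):
        p = p + step
        xi, yi = int(round(p[0])), int(round(p[1]))
        if not (0 <= xi < w and 0 <= yi < h):
            return None                     # ray left the screen
        z_buf = depth[yi, xi]
        if z_buf <= p[2] <= z_buf + thickness:
            return (yi, xi)                 # ray passed just behind the surface
    return None
```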
We address the problem of treatment effect estimation from data fusion, i.e., multiple datasets collected under different treatment assignment mechanisms, in the presence of unmeasured confounders. For example, marketers may assign different advertising strategies for the same product at different times/places. To handle the bias induced by unmeasured confounders and data fusion, we propose to separate the observational data into multiple groups (each with an independent treatment assignment mechanism), and then explicitly model the group indicators as Latent Group Instrumental Variables (LatGIVs) to implement IV-based regression. In this paper, we conceptualize this idea and develop a unified framework to (1) estimate the distributional differences of the observed variables across groups; (2) model the LatGIVs from the different treatment assignment mechanisms; and (3) plug in the LatGIVs to estimate the treatment-response function. Empirical results demonstrate the advantages of LatGIV compared with state-of-the-art methods.
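Once the group indicators are treated as instruments, step (3) reduces to standard IV regression; a minimal two-stage least squares sketch with one-hot group dummies as instruments (illustrative, omitting the latent-variable modeling of steps (1)-(2)):

```python
import numpy as np

def two_stage_ls(treatment, outcome, group):
    """2SLS with one-hot group indicators as instruments.

    treatment, outcome: (n,) float arrays; group: (n,) integer group ids.
    Returns the estimated treatment effect.
    """
    z = np.eye(group.max() + 1)[group]                 # (n, G) instrument matrix
    # Stage 1: project the treatment onto the instruments.
    coef1, *_ = np.linalg.lstsq(z, treatment, rcond=None)
    t_hat = z @ coef1
    # Stage 2: regress the outcome on the instrumented treatment (+ intercept).
    x = np.column_stack([t_hat, np.ones_like(t_hat)])
    coef2, *_ = np.linalg.lstsq(x, outcome, rcond=None)
    return coef2[0]
```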
Benchmark datasets play an important role in evaluating natural language understanding (NLU) models. However, shortcuts, i.e., unwanted biases in benchmark datasets, can damage the effectiveness of benchmark datasets in revealing models' real capabilities. Since shortcuts vary in coverage, productivity, and semantic meaning, it is challenging for NLU experts to systematically understand and avoid them when creating benchmark datasets. In this paper, we develop a visual analytics system, ShortcutLens, to help NLU experts explore shortcuts in NLU benchmark datasets. The system allows users to conduct multi-level exploration of shortcuts. Specifically, the statistics view helps users grasp statistics such as the coverage and productivity of shortcuts in the benchmark dataset. The template view employs hierarchical and interpretable templates to summarize different types of shortcuts. The instance view allows users to inspect the instances covered by a given shortcut. We conduct case studies and expert interviews to evaluate the effectiveness and usability of the system. The results demonstrate that ShortcutLens supports users in better understanding benchmark dataset issues through shortcuts, inspiring them to create challenging and pertinent benchmark datasets.
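The coverage and productivity statistics surfaced by the statistics view can be computed directly; a sketch for a single word-level shortcut candidate, using the common definitions from the shortcut literature (assumed here, not quoted from the paper):

```python
from collections import Counter

def shortcut_stats(instances, labels, token):
    """Coverage: fraction of instances containing the token.
    Productivity: how predictive the token is of its majority label."""
    hits = [(txt, y) for txt, y in zip(instances, labels)
            if token in txt.split()]
    coverage = len(hits) / len(instances)
    if not hits:
        return coverage, 0.0
    label_counts = Counter(y for _, y in hits)
    productivity = label_counts.most_common(1)[0][1] / len(hits)
    return coverage, productivity
```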
Accurate protein structure prediction can significantly accelerate the development of the life sciences. The accuracy of AlphaFold2, a frontier end-to-end structure prediction system, has already approached that of experimental determination techniques. Due to the complex model architecture and large memory consumption, implementing the training and inference of AlphaFold2 from scratch requires substantial computational resources and time. The cost of running the original AlphaFold2 is expensive for most individuals and institutions; reducing this cost could therefore accelerate the development of the life sciences. We implement AlphaFold2 using PaddlePaddle, namely HelixFold, to improve training and inference speed and reduce memory consumption. Operator fusion, tensor fusion, and hybrid parallelism improve the performance, while memory is optimized through recompute, bfloat16, and in-place memory read/write. Compared with the original AlphaFold2 (implemented in JAX) and OpenFold (implemented in PyTorch), HelixFold needs only 7.5 days to complete the full end-to-end training and only 5.3 days when using hybrid parallelism, while both AlphaFold2 and OpenFold take about 11 days. HelixFold saves 1x training time. We verify that the accuracy of HelixFold is comparable to AlphaFold2 on the CASP14 and CAMEO datasets. The code of HelixFold is freely available for download at https://github.com/PaddlePaddle/PaddleHelix/tree/dev/protein_folding/helixfold, and we also provide a stable web service at https://paddlehelix.baidu.com/app/drug/protein/forecast.
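Recompute and bfloat16 are generic memory optimizations; a PyTorch-style sketch of the same two ideas (HelixFold itself is implemented in PaddlePaddle, so this is an analogy, not its code):

```python
import torch
from torch.utils.checkpoint import checkpoint

def run_stack(blocks, x):
    """Run a stack of blocks with (1) activation recompute, so intermediate
    activations are rebuilt during backward instead of stored, and
    (2) bfloat16 autocast to roughly halve activation memory (assumes GPU)."""
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        for block in blocks:
            x = checkpoint(block, x, use_reentrant=False)  # recompute in backward
    return x
```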
Throttling is one of the most popular budget control methods in today's online advertising markets. When a budget-constrained advertiser employs throttling, she can choose whether to participate in an auction after the advertising platform suggests a bid. This paper focuses on the dynamic budget throttling process in repeated second-price auctions from a theoretical view. An essential feature of the underlying problem is that the advertiser does not know the competing highest bid when entering the market. To model the difficulty of eliminating such uncertainty, we consider two different information structures. With full-information feedback, the advertiser obtains the competing highest bid of every round; with partial-information feedback, she only obtains the competing highest bid in the auctions she participates in. We propose the OGD-CB algorithm, which involves simultaneous distribution learning and revenue optimization when facing online ad queries. In both settings, we prove that this algorithm guarantees an $O(\sqrt{T \log T})$ regret with probability $1 - O(1/T)$ relative to the fluid adaptive throttling benchmark. By proving a lower bound of $\Omega(\sqrt{T})$ on the minimal regret even for the hindsight optimum, we establish the near-optimality of our algorithm. Finally, we compare the fluid optimum of throttling to that of pacing, another widely adopted budget control method. The numerical relationship between these benchmarks provides further insights into the comparison of different online algorithms for budget management.
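The throttling decision itself can be expressed as a simple online loop: keep a dual multiplier updated by online gradient descent and participate only when the suggested bid looks profitable enough at the current multiplier. A toy version of this dual-descent idea follows (not the exact OGD-CB algorithm, whose confidence-bound machinery is in the paper):

```python
def throttle(values, bids, budget, T, eta=0.01):
    """Participate in round t iff the value/bid trade-off beats a dual
    multiplier that OGD steers toward spending budget at rate budget/T."""
    rho, lam, spend, decisions = budget / T, 0.0, 0.0, []
    for t in range(T):
        participate = spend < budget and values[t] >= lam * bids[t]
        cost = bids[t] if participate else 0.0    # payment proxy for the round
        spend += cost
        decisions.append(participate)
        lam = max(0.0, lam - eta * (rho - cost))  # OGD step on the dual variable
    return decisions
```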
A good model for action-effect prediction, called an environment model, is important for achieving sample-efficient policy learning in many domains such as robot control, recommender systems, and patient treatment selection. With such a model we can take unlimited trials to identify appropriate actions, saving the cost of queries in the real world. This requires the model to handle unseen data, also called counterfactual data, correctly. However, standard data-fitting techniques do not automatically achieve such generalization ability and commonly result in unreliable models. In this work, we introduce counterfactual-query risk minimization (CQRM) in model learning for generalizing to a counterfactual dataset queried by a specific target policy. Since target policies can be various and unknown during policy learning, we propose an adversarial CQRM objective in which the model learns on counterfactual data queried by adversarial policies, and finally derive a tractable solution, Galileo. We also discover that adversarial CQRM is closely related to adversarial model learning, which explains the effectiveness of the latter. We apply Galileo to synthetic tasks and real-world applications. The results show that Galileo makes accurate predictions on counterfactual data and thus significantly improves the policies in real-world testing.
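At a high level, adversarial CQRM alternates between an adversary choosing the query policy where the model errs most and a model update on data importance-reweighted toward that policy; a toy sketch (the policy set, weighting, and updates are illustrative, not Galileo itself):

```python
import torch

def adversarial_cqrm_step(model, policies, s, a, r, behavior_prob, lr=1e-3):
    """One min-max step: pick the worst-case query policy, then fit the
    model on data importance-reweighted toward that policy."""
    with torch.no_grad():
        losses = [(((model(s, a) - r) ** 2) * (pi(s, a) / behavior_prob)).mean()
                  for pi in policies]
        worst = policies[int(torch.stack(losses).argmax())]  # adversary's pick
    w = (worst(s, a) / behavior_prob).detach()               # importance weights
    loss = (w * (model(s, a) - r) ** 2).mean()
    for p in model.parameters():
        p.grad = None
    loss.backward()
    with torch.no_grad():                                    # plain SGD update
        for p in model.parameters():
            p -= lr * p.grad
    return loss.item()
```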
Self-supervised speech representation learning has shown promising results in various speech processing tasks. However, the pre-trained models, e.g., HuBERT, are storage-intensive Transformers, limiting their scope of applications under low-resource settings. To this end, we propose LightHuBERT, a once-for-all Transformer compression framework, to find the desired architectures automatically by pruning structured parameters. More precisely, we create a Transformer-based supernet that is nested with thousands of weight-sharing subnets and design a two-stage distillation strategy to leverage the contextualized latent representations from HuBERT. Experiments on automatic speech recognition (ASR) and the SUPERB benchmark show that the proposed LightHuBERT enables over $10^9$ architectures concerning the embedding dimension, attention dimension, head number, feed-forward network ratio, and network depth. LightHuBERT outperforms the original HuBERT on ASR and five SUPERB tasks at the HuBERT size, achieves comparable performance to the teacher model in most tasks with a 29% reduction in parameters, and obtains a $3.5\times$ compression ratio in three SUPERB tasks, e.g., automatic speaker verification, keyword spotting, and intent classification, with a slight accuracy loss. The code and pre-trained models are available at https://github.com/mechanicalsea/lighthubert.
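The weight-sharing trick behind such a supernet is to slice one oversized parameter tensor for every sampled sub-dimension; a minimal sketch for a shared linear layer (shapes and the slicing rule are illustrative, not LightHuBERT's code):

```python
import torch
import torch.nn.functional as F

class SharedLinear(torch.nn.Module):
    """One weight matrix serves every sampled (in_dim, out_dim) subnet."""
    def __init__(self, max_in=768, max_out=3072):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(max_out, max_in) * 0.02)
        self.bias = torch.nn.Parameter(torch.zeros(max_out))

    def forward(self, x, in_dim, out_dim):
        # Slice the top-left corner of the shared weights for this subnet.
        return F.linear(x, self.weight[:out_dim, :in_dim], self.bias[:out_dim])

# Sampling one subnet configuration during supernet training:
layer = SharedLinear()
x = torch.randn(4, 512)                  # sampled embedding dim = 512
y = layer(x, in_dim=512, out_dim=2048)   # sampled FFN dim = 2048
```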